A new kernel-based approach for overparameterized Hammerstein system identification
In this paper we propose a new identification scheme for Hammerstein systems,
which are dynamic systems consisting of a static nonlinearity and a linear
time-invariant dynamic system in cascade. We assume that the nonlinear function
can be described as a linear combination of basis functions. We reconstruct
the coefficients of the nonlinearity together with the first samples of
the impulse response of the linear system by estimating a single
overparameterized vector, which contains all the combinations of the unknown
variables. To avoid high variance in these estimates, we adopt a regularized
kernel-based approach and, in particular, we introduce a new kernel tailored
for Hammerstein system identification. We show that the resulting scheme
provides an estimate of the overparameterized vector that can be uniquely
decomposed as the combination of an impulse response and coefficients of
the static nonlinearity. We also show, through several numerical experiments,
that the proposed method compares very favorably with two standard methods for
Hammerstein system identification.
Comment: 17 pages, submitted to IEEE Conference on Decision and Control 201
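The overparameterization and unique decomposition described in this abstract can be sketched numerically. In the sketch below all sizes, basis functions, and noise levels are hypothetical, and plain least squares stands in for the paper's tailored kernel-based regularization: the products of impulse-response samples and nonlinearity coefficients are estimated as one vector, then the rank-one structure is recovered with an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): 3 polynomial basis functions
# and a length-20 impulse response.
n_g, n_c, N = 20, 3, 400
phi = [lambda u: u, lambda u: u**2, lambda u: u**3]   # assumed basis functions

u = rng.standard_normal(N)
c_true = np.array([1.0, 0.5, -0.2])                   # nonlinearity coefficients
g_true = 0.8 ** np.arange(n_g)                        # stable impulse response

# Static nonlinearity followed by the LTI system (the Hammerstein cascade).
f_u = sum(ck * p(u) for ck, p in zip(c_true, phi))
y = np.convolve(f_u, g_true)[:N] + 0.01 * rng.standard_normal(N)

# Regressor for the overparameterized vector theta = vec(g c^T):
# column (i, k) holds phi_k(u_{t-i}).
Phi = np.zeros((N, n_g * n_c))
for i in range(n_g):
    for k in range(n_c):
        Phi[:, i * n_c + k] = np.concatenate([np.zeros(i), phi[k](u)[:N - i]])

# Plain least squares stands in for the paper's regularized kernel-based
# estimate of the overparameterized vector.
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Recover the unique rank-one decomposition theta ~ vec(g c^T) via the SVD.
U_, s, Vt = np.linalg.svd(theta.reshape(n_g, n_c))
g_hat, c_hat = U_[:, 0] * s[0], Vt[0]
# Resolve the scale/sign ambiguity by normalizing the first coefficient.
g_hat, c_hat = g_hat * c_hat[0], c_hat / c_hat[0]
```

The SVD step is what makes the decomposition unique up to the usual scale ambiguity, which is fixed here by normalizing the first nonlinearity coefficient.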
Kernel-based system identification from noisy and incomplete input-output data
In this contribution, we propose a kernel-based method for the identification
of linear systems from noisy and incomplete input-output datasets. We model the
impulse response of the system as a Gaussian process whose covariance matrix is
given by the recently introduced stable spline kernel. We adopt an empirical
Bayes approach to estimate the posterior distribution of the impulse response
given the data. The noiseless and missing data samples, together with the
kernel hyperparameters, are estimated maximizing the joint marginal likelihood
of the input and output measurements. To compute the marginal-likelihood
maximizer, we build a solution scheme based on the Expectation-Maximization
method. Simulations on a benchmark dataset show the effectiveness of the
method.
Comment: 16 pages, submitted to IEEE Conference on Decision and Control 201
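The core ingredients of this approach can be sketched in a few lines, assuming complete inputs and a known noise variance for simplicity (the paper's EM machinery is what handles the noisy and missing samples). The kernel below is the first-order stable spline (TC) form, and a small grid search stands in for the marginal-likelihood maximization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes and noise level; the noise variance is assumed known here.
n, N, sigma = 30, 200, 0.1
u = rng.standard_normal(N)
g_true = 0.9 ** np.arange(n) * np.sin(0.5 * np.arange(n))

# Regression form of the convolution: y = U g + noise, U[t, i] = u[t - i].
U = np.zeros((N, n))
for i in range(n):
    U[i:, i] = u[:N - i]
y = U @ g_true + sigma * rng.standard_normal(N)

# First-order stable spline (TC) kernel: K[i, j] = lam * alpha**max(i, j).
def tc_kernel(n, lam, alpha):
    idx = np.arange(n)
    return lam * alpha ** np.maximum.outer(idx, idx)

# Empirical Bayes: once g is marginalized out, y ~ N(0, U K U' + sigma^2 I),
# so the hyperparameters can be chosen by maximizing the marginal likelihood.
def neg_log_marglik(lam, alpha):
    S = U @ tc_kernel(n, lam, alpha) @ U.T + sigma**2 * np.eye(N)
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

lam, alpha = min(((l, a) for l in (0.1, 1.0, 10.0)
                  for a in (0.5, 0.8, 0.9, 0.95)),
                 key=lambda p: neg_log_marglik(*p))

# Posterior mean of the impulse response under the Gaussian-process prior.
K = tc_kernel(n, lam, alpha)
S = U @ K @ U.T + sigma**2 * np.eye(N)
g_hat = K @ U.T @ np.linalg.solve(S, y)
```

The posterior-mean formula is the standard Gaussian-process regression estimate; in the paper the same quantities appear inside an Expectation-Maximization loop that also reconstructs the missing and noiseless data.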
Parameter elimination in particle Gibbs sampling
Bayesian inference in state-space models is challenging due to
high-dimensional state trajectories. A viable approach is particle Markov chain
Monte Carlo, combining MCMC and sequential Monte Carlo to form "exact
approximations" to otherwise intractable MCMC methods. The performance of the
approximation is limited to that of the exact method. We focus on particle
Gibbs and particle Gibbs with ancestor sampling, improving their performance
beyond that of the underlying Gibbs sampler (which they approximate) by
marginalizing out one or more parameters. This is possible when the parameter
prior is conjugate to the complete data likelihood. Marginalization yields a
non-Markovian model for inference, but we show that, in contrast to the general
case, this method still scales linearly in time. While marginalization can be
cumbersome to implement, recent advances in probabilistic programming have
enabled its automation. We demonstrate how the marginalized methods are viable
as efficient inference backends in probabilistic programming, and demonstrate
with examples in ecology and epidemiology.
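The linear-in-time property of the marginalized model can be illustrated on a toy example (all choices here are hypothetical, not taken from the paper): with a Gaussian observation likelihood and a conjugate inverse-gamma prior on the noise variance, the marginal likelihood factorizes into one-step Student-t predictives driven by two running sufficient statistics, so each step costs O(1) despite the model being non-Markovian.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Toy model (hypothetical): the state trajectory x is treated as known,
# y_t = x_t + e_t with e_t ~ N(0, s2), and the parameter s2 has a
# conjugate inverse-gamma prior s2 ~ IG(a0, b0).
T = 100
x = np.cumsum(rng.standard_normal(T))
y = x + math.sqrt(0.5) * rng.standard_normal(T)
a0, b0 = 2.0, 1.0

# Marginalizing s2 couples all time steps, but the marginal likelihood
# factorizes into one-step Student-t predictives driven by two running
# sufficient statistics (a, b): each step is O(1), the sweep is O(T).
def log_marglik_sequential(y, x):
    a, b, ll = a0, b0, 0.0
    for yt, xt in zip(y, x):
        r = yt - xt
        ll += (math.lgamma(a + 0.5) - math.lgamma(a)
               + a * math.log(b) - (a + 0.5) * math.log(b + 0.5 * r * r)
               - 0.5 * math.log(2.0 * math.pi))
        a, b = a + 0.5, b + 0.5 * r * r          # conjugate update
    return ll

# Batch formula for the same marginal likelihood, as a consistency check.
def log_marglik_batch(y, x):
    n, ss = len(y), float(np.sum((y - x) ** 2))
    return (math.lgamma(a0 + 0.5 * n) - math.lgamma(a0)
            + a0 * math.log(b0) - (a0 + 0.5 * n) * math.log(b0 + 0.5 * ss)
            - 0.5 * n * math.log(2.0 * math.pi))

seq, bat = log_marglik_sequential(y, x), log_marglik_batch(y, x)
```

The two functions compute the same quantity; the sequential form is the one a marginalized particle Gibbs sweep would use, updating (a, b) alongside the particle trajectories.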
Bayesian learning of structured dynamical systems
In this thesis, we propose some Bayesian approaches to the identification of structured dynamical systems. In particular, we consider block-oriented models in which a complex system is built starting from simple linear and nonlinear building blocks. Each building block has a Gaussian-process model that can be used to include prior information into the learning problem. The learning is then guided by Bayes' theorem. In particular, we use an empirical Bayes approach to perform the identification of models with hyperparameters. As the models considered in this thesis are, in general, intractable, we propose several approximation methods based on variational Bayes and Markov-chain Monte Carlo sampling. To estimate the hyperparameters, we propose iterative algorithms based on variational expectation maximization and stochastic-approximation expectation maximization.

The main contribution of the thesis is developed in Part II. Here, we first study uncertain-input systems and Wiener systems as the typical Gaussian-process models of two-block cascades. In addition, we propose a robust approach for uncertain-input systems with outliers in the measurements. Then, we proceed to consider more complex structures such as acyclic networks of linear dynamical systems, feedback interconnections of linear systems, and three-block nonlinear structures such as the Wiener-Hammerstein and Hammerstein-Wiener cascades. Finally, we consider some problems related to quantized measurements: we propose an approximate estimator and we provide a rigorous analysis of the statistical properties of quantization noise. All the models and methods are discussed in detail and accompanied by algorithms and implementation details. The proposed techniques are shown in several simulation examples.
QC 20181107
System identification with input uncertainties: an EM kernel-based approach
Many classical problems in system identification, such as the classical prediction error method and regularized system identification, identification of Hammerstein and cascaded systems, blind system identification, as well as errors-in-variables problems and estimation with missing data, can be seen as particular instances of the general problem of the identification of systems with limited information. In this thesis, we introduce a framework for the identification of linear dynamical systems subject to inputs that are not perfectly known. We present the class of uncertain-input models, that is, linear systems subject to inputs about which only limited information is available. Using the Gaussian-process framework, we model the uncertain input as the realization of a Gaussian process. Similarly, we model the impulse response of the linear system as the realization of a Gaussian process. Using the mean and covariance functions of the Gaussian processes, we can incorporate prior information about the system in the model. Interpreting the Gaussian-process models as prior distributions of the unknowns, we can find the minimum mean-square-error estimates of the input and of the impulse response of the system. These estimates depend on some parameters, called hyperparameters, that need to be estimated from the available data. Using an empirical Bayes approach, we estimate the hyperparameters from the marginal likelihood of the data. The maximization of the marginal likelihood is carried out using an iterative scheme based on the Expectation-Maximization method. Depending on the assumptions made on the models of the input and of the system, the standard E-step may not be available in closed form. In this case, the E-step is replaced with a Markov chain Monte Carlo integration scheme based on the Gibbs sampler.

After showing how to estimate the system and the hyperparameters, we show how to specialize the general uncertain-input model to particular structures and how to modify the general estimation method to account for these particular structures. In the last chapter, we show in what sense the aforementioned classical system identification problems can be seen as uncertain-input model identification problems; we show the effectiveness of the framework in dealing with these classical problems in several numerical examples.
QC 20160520
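As a rough illustration of the uncertain-input idea (not the thesis's exact algorithm), the sketch below models both the input and the impulse response as Gaussian vectors and alternates the two conditional Gaussian updates; the sizes, kernels, and noise levels are made up, and the simple alternation stands in for the EM and Gibbs schemes described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical uncertain-input setting: the input u is only seen through
# noisy measurements w, and the output y is the convolution of u with an
# unknown impulse response g; both u and g get Gaussian priors.
N, n, sw, sy = 150, 15, 0.1, 0.05
u_true = rng.standard_normal(N)
g_true = 0.7 ** np.arange(n)
w = u_true + sw * rng.standard_normal(N)
y = np.convolve(u_true, g_true)[:N] + sy * rng.standard_normal(N)

def conv_matrix(h, N):
    """N x N lower-triangular convolution matrix of the vector h."""
    M = np.zeros((N, N))
    for i, hi in enumerate(h[:N]):
        M += hi * np.eye(N, k=-i)
    return M

# Priors: u ~ N(0, I); g gets a first-order stable spline (TC) prior.
Kg_inv = np.linalg.inv(0.9 ** np.maximum.outer(np.arange(n), np.arange(n)))

# Alternate the two conditional Gaussian updates (a coordinate-wise stand-in
# for the EM / Gibbs machinery of the thesis).
u_hat = w.copy()
for _ in range(10):
    # g | u: y = U g + e_y, with U the convolution matrix of the input.
    Umat = conv_matrix(u_hat, N)[:, :n]
    g_hat = np.linalg.solve(Kg_inv + Umat.T @ Umat / sy**2,
                            Umat.T @ y / sy**2)
    # u | g: fuse the input measurements w and the output y.
    G = conv_matrix(g_hat, N)
    u_hat = np.linalg.solve(np.eye(N) * (1 + 1 / sw**2) + G.T @ G / sy**2,
                            w / sw**2 + G.T @ y / sy**2)
```

Each conditional update is an ordinary Gaussian posterior mean; the noisy input measurements pin down the scale of u, which removes the blind-identification ambiguity that would otherwise affect the (g, u) pair.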